132 research outputs found

    Vibration Propagation on the Skin of the Arm

    Vibrotactile interfaces are an inexpensive and non-invasive way to provide performance feedback to body-machine interface users. Interfaces for the upper extremity have utilized a multi-channel approach using an array of vibration motors placed on the upper extremity. However, for successful perception of multi-channel vibrotactile feedback on the arm, we need to account for vibration propagation across the skin. If two stimuli are delivered within a small distance, mechanical propagation of vibration can lead to inaccurate perception of the distinct vibrotactile stimuli. This study sought to characterize vibration propagation across the hairy skin of the forearm. We characterized vibration propagation by measuring accelerations at various distances from a source vibration of varying frequency (100–240 Hz). Our results showed that acceleration from the source vibration was present at a distance of 4 cm at frequencies >150 Hz. At distances greater than 8 cm from the source, accelerations were reduced to values substantially below vibrotactile discrimination thresholds for all vibration intensities. We conclude that in future applications of vibrotactile interfaces, stimulation sites should be separated by a distance of at least 8 cm to avoid potential interference in vibration perception caused by propagating vibrations.
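    The 8 cm spacing rule from this study can be applied mechanically when laying out a motor array. A minimal sketch (the helper function and layout values are illustrative assumptions, not from the paper):

    ```python
    # Check a proposed vibrotactile motor layout against the study's
    # recommended minimum separation between stimulation sites.
    MIN_SEPARATION_CM = 8.0  # below this, propagating vibration may interfere

    def valid_layout(positions_cm):
        """Return True if every pair of motor positions (in cm along
        the forearm) is separated by at least MIN_SEPARATION_CM."""
        ordered = sorted(positions_cm)
        return all(b - a >= MIN_SEPARATION_CM
                   for a, b in zip(ordered, ordered[1:]))

    print(valid_layout([0.0, 8.0, 16.0]))  # True: 8 cm spacing suffices
    print(valid_layout([0.0, 4.0]))        # False: 4 cm risks interference
    ```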

    Tactile-STAR: A Novel Tactile STimulator And Recorder System for Evaluating and Improving Tactile Perception

    Many neurological diseases impair the motor and somatosensory systems. While several different technologies are used in clinical practice to assess and improve motor functions, somatosensation is evaluated subjectively with qualitative clinical scales. Treatment of somatosensory deficits has received limited attention. To bridge the gap between the assessment and training of motor vs. somatosensory abilities, we designed, developed, and tested a novel, low-cost, two-component (bimanual) mechatronic system targeting tactile somatosensation: the Tactile-STAR—a tactile stimulator and recorder. The stimulator is an actuated pantograph structure driven by two servomotors, with an end-effector covered by a rubber material that can apply two different types of skin stimulation: brush and stretch. The stimulator has a modular design, and can be used to test the tactile perception in different parts of the body such as the hand, arm, leg, big toe, etc. The recorder is a passive pantograph that can measure hand motion using two potentiometers. The recorder can serve multiple purposes: participants can move its handle to match the direction and amplitude of the tactile stimulator, or they can use it as a master manipulator to control the tactile stimulator as a slave. Our ultimate goal is to assess and affect tactile acuity and somatosensory deficits. To demonstrate the feasibility of our novel system, we tested the Tactile-STAR with 16 healthy individuals and with three stroke survivors using the skin-brush stimulation. We verified that the system enables the mapping of tactile perception on the hand in both populations. We also tested the extent to which 30 min of training in healthy individuals led to an improvement of tactile perception. 
The results provide a first demonstration of the ability of this new system to characterize tactile perception in healthy individuals, as well as a quantification of the magnitude and pattern of tactile impairment in a small cohort of stroke survivors. The finding that short-term training with Tactile-STAR can improve the acuity of tactile perception in healthy individuals suggests that Tactile-STAR may have utility as a therapeutic intervention for somatosensory deficits.

    Learning to push and learning to move: The adaptive control of contact forces

    To be successful at manipulating objects one needs to apply simultaneously well controlled movements and contact forces. We present a computational theory of how the brain may successfully generate a vast spectrum of interactive behaviors by combining two independent processes. One process is competent to control movements in free space and the other is competent to control contact forces against rigid constraints. Free space and rigid constraints are singularities at the boundaries of a continuum of mechanical impedance. Within this continuum, forces and motions occur in “compatible pairs” connected by the equations of Newtonian dynamics. The force applied to an object determines its motion. Conversely, inverse dynamics determine a unique force trajectory from a movement trajectory. In this perspective, we describe motor learning as a process leading to the discovery of compatible force/motion pairs. The learned compatible pairs constitute a local representation of the environment's mechanics. Experiments on force field adaptation have already provided us with evidence that the brain is able to predict and compensate for the forces encountered when one is attempting to generate a motion. Here, we tested the theory in the dual case, i.e., when one attempts to apply a desired contact force against a simulated rigid surface. If the surface becomes unexpectedly compliant, the contact point moves as a function of the applied force and this causes the applied force to deviate from its desired value. We found that, through repeated attempts at generating the desired contact force, subjects discovered the unique compatible hand motion. When, after learning, the rigid contact was unexpectedly restored, subjects displayed after effects of learning, consistent with the concurrent operation of a motion control system and a force control system. 
Together, theory and experiment support a new and broader view of modularity in the coordinated control of forces and motions.
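The notion of a compatible force/motion pair can be illustrated with the simplest contact model, a linear spring-like surface F = k·x (a toy illustration under assumed parameters, not the paper's actual experimental setup):

```python
# Toy illustration of "compatible force/motion pairs" for contact
# with an elastic surface modeled as a linear spring, F = k * x.
def compatible_displacement(desired_force_N, stiffness_N_per_m):
    """For a spring-like surface, the hand displacement compatible
    with a desired contact force is x = F / k."""
    return desired_force_N / stiffness_N_per_m

# Near-rigid surface (very high stiffness): almost no motion needed.
print(compatible_displacement(10.0, 1e6))    # 1e-05 m
# Compliant surface: the same 10 N force requires 0.1 m of motion,
# which is the unique compatible hand motion subjects must discover.
print(compatible_displacement(10.0, 100.0))  # 0.1 m
```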

    Intermittent control with ankle, hip, and mixed strategies during quiet standing: A theoretical proposal based on a double inverted pendulum model

    Human upright posture, as a mechanical system, is characterized by an instability of saddle type, involving both stable and unstable dynamic modes. The brain stabilizes such a system by generating active joint torques, according to a time-delayed neural feedback control. What remains unsolved is a clear understanding of the control strategies and the control mechanisms that are used by the central nervous system in order to stabilize the unstable posture in a robust way while maintaining flexibility. Most studies in this direction have been limited to the single inverted pendulum model, which is useful for formalizing fundamental mechanical aspects but insufficient for addressing more general issues concerning neural control strategies. Here we consider a double inverted pendulum model in the sagittal plane with small passive viscoelasticity at the ankle and hip joints. Despite difficulties in stabilizing the double pendulum model in the presence of the large feedback delay, we show that robust and flexible stabilization of the upright posture can be established by an intermittent control mechanism that achieves the goal of stabilizing the body posture according to a "divide and conquer strategy", which switches among different controllers in different parts of the state space of the double inverted pendulum. Remarkably, it is shown that global, robust stability is achieved even if the individual controllers are unstable and the information exploited for switching from one controller to another is severely delayed, as happens in biological reality. Moreover, the intermittent controller can automatically resolve coordination among multiple active torques associated with the muscle synergy, leading to the emergence of distinct temporally coordinated active torque patterns, referred to as the intermittent ankle, hip, and mixed strategies during quiet standing, depending on the passive elasticity at the hip joint.
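    The switching idea can be sketched with the simpler single inverted pendulum (the paper itself analyzes a double pendulum; the gains, the switching rule, and the state-space partition below are illustrative assumptions, not the study's actual controller):

    ```python
    # Hedged sketch of intermittent ("divide and conquer") postural
    # control: active feedback torque is applied only in one region of
    # the (theta, omega) state space; elsewhere the controller is
    # switched off and the saddle-type passive dynamics do the work.
    def intermittent_torque(theta, omega, kp=250.0, kd=20.0):
        """Return active ankle torque for tilt angle theta (rad) and
        angular velocity omega (rad/s). PD feedback is on only when
        the state is moving away from upright (theta and omega share
        a sign); otherwise the active torque is zero."""
        if theta * omega > 0:          # on-region: actively push back
            return -kp * theta - kd * omega
        return 0.0                     # off-region: rely on passive dynamics

    # The switch depends only on which quadrant of the state space
    # the pendulum occupies:
    print(intermittent_torque(0.05, 0.1))   # active restoring torque
    print(intermittent_torque(0.05, -0.1))  # 0.0, controller off
    ```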

    Sensory Motor Remapping of Space in Human-Machine Interfaces

    Studies of adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. These studies have also pointed out that adaptation to novel dynamics is aimed at preserving the trajectories of a controlled endpoint, either the hand of a subject or a transported object. We review some of these experiments and present more recent studies aimed at understanding how the motor system forms representations of the physical space in which actions take place. An extensive line of investigations in visual information processing has dealt with the issue of how the Euclidean properties of space are recovered from visual signals that do not appear to possess these properties. The same question is addressed here in the context of motor behavior and motor learning by observing how people remap hand gestures and body motions that control the state of an external device. We present some theoretical considerations and experimental evidence about the ability of the nervous system to create novel patterns of coordination that are consistent with the representation of extrapersonal space. We also discuss the perspective of endowing human–machine interfaces with learning algorithms that, combined with human learning, may facilitate the control of powered wheelchairs and other assistive devices.

    3D functional sport prostheses

    3D printing techniques are making rapid improvements, especially in the medical field, where many people are requesting prosthetic devices that are customized in design and function. This is happening especially in sports, facilitating access for people with disabilities by providing more affordable devices suitable for those who want to play sports without having Olympic ambitions. In this study we present some 3D printed sport devices, for swimming and cycling, developed by the Io Do Una Mano Association, the official Italian chapter of e-Nable. Each device presented has been customized for a specific disability and user request.

    Virtual and Augmented Reality in Basic and Advanced Life Support Training

    The use of augmented reality (AR) and virtual reality (VR) for life support training is increasing. These technologies provide an immersive experience that supports learning in a safe and controlled environment. This review focuses on the use of AR and VR for emergency care training for health care providers, medical students, and nonprofessionals. In particular, we analyzed (1) serious games and nonimmersive games, both single-player and multiplayer; (2) VR tools ranging from semi-immersive to immersive virtual and mixed reality; and (3) AR applications. All the toolkits have been investigated in terms of application goals (training, assessment, or both), simulated procedures, and skills. The main goal of this work is to summarize and organize the findings of studies coming from multiple research areas in order to make them accessible to all the professionals involved in medical simulation. The analysis of the state-of-the-art technologies reveals that tools and studies related to the multiplayer experience, haptic feedback, and evaluation of user’s manual skills in the foregoing health care-related environments are still limited and require further investigation. There is also a need for studies assessing whether AR/VR-based systems are superior or, at the minimum, comparable to traditional training methods.

    Using the Functional Reach Test for Probing the Static Stability of Bipedal Standing in Humanoid Robots Based on the Passive Motion Paradigm

    The goal of this paper is to analyze the static stability of a computational architecture, based on the Passive Motion Paradigm, for coordinating the redundant degrees of freedom of a humanoid robot during whole-body reaching movements in bipedal standing. The analysis is based on a simulation study that implements the Functional Reach Test, originally developed for assessing the danger of falling in elderly people. The study is carried out in the YARP environment, which allows realistic simulations with the iCub humanoid robot.

    Bimanual Motor Strategies and Handedness Role During Human-Exoskeleton Haptic Interaction

    Bimanual object manipulation involves multiple visuo-haptic sensory feedbacks arising from the interaction with the environment that are managed by the central nervous system and translated into motor commands. Kinematic strategies that occur during bimanual coupled tasks are still a matter of scientific debate despite modern advances in haptics and robotics. Current technologies may have the potential to provide realistic scenarios involving the entire upper limb during multi-joint movements but are not yet exploited to their full potential. The present study explores how the hands dynamically interact when manipulating a shared object through the use of two impedance-controlled exoskeletons programmed to simulate bimanually coupled manipulation of virtual objects. We enrolled twenty-six participants (2 groups: right-handed and left-handed) who were requested to use both hands to grab simulated objects across the robot workspace and place them in specific locations. The virtual objects were rendered with different dynamic properties and textures influencing the manipulation strategies needed to complete the tasks. Results revealed that the roles of the hands are related to the movement direction, the haptic features, and the handedness preference. Outcomes suggested that haptic feedback affects bimanual strategies depending on the movement direction. However, left-handers showed better control of the force applied between the two hands, probably due to environmental pressures for right-handed manipulations.
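    Impedance-controlled rendering of a virtual object typically follows a spring-damper law; a minimal sketch of that generic control law (the exoskeletons' actual controller and gains are not given in the abstract, so all values here are illustrative):

    ```python
    # Generic impedance-control law for haptic rendering: the robot
    # applies a restoring force toward the virtual object's surface,
    # F = K * (x_ref - x) - B * v.
    def impedance_force(x, v, x_ref, stiffness, damping):
        """Force (N) felt at the handle for position x (m), velocity
        v (m/s), reference position x_ref (m), stiffness K (N/m),
        and damping B (N*s/m)."""
        return stiffness * (x_ref - x) - damping * v

    # A stiffer virtual object feels "harder": the same 1 cm
    # penetration produces a larger reaction force.
    print(impedance_force(0.01, 0.0, 0.0, 500.0, 5.0))   # -5.0 N
    print(impedance_force(0.01, 0.0, 0.0, 2000.0, 5.0))  # -20.0 N
    ```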